Google DeepMind wants to know if chatbots are just virtue signaling

MIT Technology Review

Google DeepMind is calling for the moral behavior of large language models--such as what they do when called on to act as companions, therapists, medical advisors, and so on--to be scrutinized with the same kind of rigor as their ability to code or do math. As LLMs improve, people are asking them to play more and more sensitive roles in their lives. Agents are starting to take actions on people's behalf. LLMs may be able to influence human decision-making. And yet nobody knows how trustworthy this technology really is at such tasks. With coding and math, you have clear-cut, correct answers that you can check, William Isaac, a research scientist at Google DeepMind, told me when I met him and Julia Haas, a fellow research scientist at the firm, for an exclusive preview of their work, which is published today. That's not the case for moral questions, which typically have a range of acceptable answers: "Morality is an important capability but hard to evaluate," says Isaac. "In the moral domain, there's no right and wrong," adds Haas.






Leveraging the two-timescale regime to demonstrate convergence of neural networks

Neural Information Processing Systems

Artificial neural networks are among the most successful modern machine learning methods, in particular because their non-linear parametrization provides a flexible way to implement feature learning (see, e.g., Goodfellow et al., 2016, chapter 15).


Reducing Shape-Radiance Ambiguity in Radiance Fields with a Closed-Form Color Estimation Method

Qihang Fang, Yafei Song, Keqiang Li

Neural Information Processing Systems

A neural radiance field (NeRF) enables the synthesis of cutting-edge realistic novel view images of a 3D scene. It includes density and color fields to model the shape and radiance of a scene, respectively. Supervised by the photometric loss in an end-to-end training manner, NeRF inherently suffers from the shape-radiance ambiguity problem, i.e., it can perfectly fit training views but does not guarantee decoupling the two fields correctly.
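To make the ambiguity concrete, here is a minimal sketch (an illustrative assumption, not the paper's method) of NeRF-style volume rendering and the photometric loss the abstract refers to. The function names `render_ray_color` and `photometric_loss` are hypothetical; `sigmas`, `colors`, and `deltas` stand for per-sample densities, RGB values, and step sizes along one camera ray.

```python
import numpy as np

def render_ray_color(sigmas, colors, deltas):
    """Composite per-sample densities and colors into one pixel color
    via the standard volume-rendering quadrature."""
    alphas = 1.0 - np.exp(-sigmas * deltas)          # opacity of each sample
    # Transmittance T_i: probability the ray reaches sample i unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]
    weights = trans * alphas                          # contribution of each sample
    rgb = (weights[:, None] * colors).sum(axis=0)
    return rgb, weights

def photometric_loss(pred_rgb, gt_rgb):
    """End-to-end training signal: squared error between the rendered
    pixel color and the observed pixel color."""
    return float(np.mean((np.asarray(pred_rgb) - np.asarray(gt_rgb)) ** 2))
```

Because only the composited pixel color is supervised, many different (density, color) pairs can reproduce the same training pixels, which is the shape-radiance ambiguity the abstract describes.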